6 research outputs found

    Off-Street Vehicular Fog for Catering Applications in 5G/B5G: A Trust-based Task Mapping Solution and Open Research Issues

    One of the key enablers for serving applications with stringent latency requirements in 5G networks is fog computing, as it is situated closer to the end users. With the technological advancement of vehicles' on-board units, their computing capabilities are becoming robust, and considering the underutilization of off-street vehicles, we envision that off-street vehicles can be an enormously useful computational resource for fog computing. Additionally, clustering the vehicles is advantageous in order to improve service availability. As vehicles become highly connected, trust is needed, especially in distributed environments. However, vehicles come from different manufacturers and have different platforms, security mechanisms, and varying parking durations. This leads to unpredictable vehicle behavior, making it difficult to quantify the trust value of a vehicle. A trust-based solution is necessary for task mapping, as a task has a set of properties, including expected completion time and trust requirements, that need to be met. However, the existing metrics used for trust evaluation in vehicular fog computing, such as velocity and direction, are not applicable in off-street vehicle fog environments. In this paper, we propose a framework for quantifying the trust value of off-street vehicle fog computing facilities in 5G networks and forming logical clusters of vehicles based on the trust values. This allows tasks to be shared with multiple vehicles in the same cluster that meet the tasks' trust requirements. Further, we propose a novel task mapping algorithm to increase vehicle resource utilization and meet the desired trust requirements while maintaining the latency requirements imposed by 5G applications. Results obtained using the iFogSim simulator demonstrate that the proposed solution increases vehicle resource utilization and noticeably reduces task drops. The paper also presents open research issues pertaining to the study to guide future research.
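    To make the idea concrete, the sketch below illustrates, in Python, how trust-based clustering and greedy task mapping over off-street vehicle fog nodes might look. The vehicle and task attributes, the trust band used for clustering, and the greedy placement policy are illustrative assumptions, not the paper's actual algorithm.

```python
# A minimal, illustrative sketch (not the paper's implementation) of trust-aware
# task mapping over clustered off-street vehicle fog nodes. All attributes,
# thresholds and the greedy policy below are assumptions made for illustration.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Vehicle:
    vid: str
    trust: float          # quantified trust value in [0, 1]
    capacity: float       # available compute (e.g. MIPS)
    latency_ms: float     # estimated access latency to the fog site

@dataclass
class Task:
    tid: str
    demand: float         # required compute
    min_trust: float      # trust requirement
    max_latency_ms: float # latency requirement

def form_clusters(vehicles: List[Vehicle], trust_band: float = 0.1) -> List[List[Vehicle]]:
    """Group vehicles whose trust values fall within the same band."""
    clusters: List[List[Vehicle]] = []
    for v in sorted(vehicles, key=lambda x: x.trust, reverse=True):
        for cluster in clusters:
            if abs(cluster[0].trust - v.trust) <= trust_band:
                cluster.append(v)
                break
        else:
            clusters.append([v])
    return clusters

def map_task(task: Task, clusters: List[List[Vehicle]]) -> Optional[Vehicle]:
    """Greedily place a task on the eligible vehicle with the most headroom."""
    candidates = [
        v for cluster in clusters for v in cluster
        if v.trust >= task.min_trust
        and v.latency_ms <= task.max_latency_ms
        and v.capacity >= task.demand
    ]
    if not candidates:
        return None  # task is dropped
    best = max(candidates, key=lambda v: v.capacity)  # favour utilisation headroom
    best.capacity -= task.demand
    return best
```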

    An Efficient Resource Management Mechanism for Network Slicing in LTE Network

    The proliferation of mobile devices and user applications continues to contribute to the enormous volume of data traffic in cellular networks. To surmount this challenge, service and resource providers are looking for alternative mechanisms that can facilitate managing network resources in a more dynamic, predictive, and distributed manner. New network architecture concepts such as Software Defined Networking (SDN) and Network Function Virtualization (NFV) have paved the way to move from static to flexible networks. They make networks more flexible (i.e. network providers become capable of on-demand provisioning), easily customizable, and cost effective. In this regard, network slicing is emerging as a new technology built on the concepts of SDN and NFV. It splits a network infrastructure into isolated virtual networks and allows each of them to manage resource allocation individually based on its requirements and characteristics. Most existing solutions for network slicing are computationally expensive because of the time they require to estimate the resources needed for each isolated slice. In addition, there is no guarantee that the allocated resources are fairly shared among users within a slice. In this paper, we propose a Network Slicing Resource Management (NSRM) mechanism that assigns the required resources to each slice in an LTE network, taking into consideration resource isolation between different slices. In addition, NSRM aims to ensure isolation and fair sharing of the distributed bandwidth among users belonging to the same slice. In NSRM, depending on its requirements, each slice can be customized (e.g. each can have a different scheduling policy).
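    As a rough illustration of the slicing idea, the following Python sketch partitions a cell's resource blocks between isolated slices and shares each slice's allocation fairly among its users. The slice names, weights, and equal-share policy are assumptions made for illustration and are not taken from the NSRM mechanism itself.

```python
# A minimal, illustrative sketch of slice-isolated bandwidth allocation with fair
# sharing inside each slice. Slice names, weights and the equal-share policy are
# illustrative assumptions, not the NSRM specification.
from typing import Dict, List

def allocate_slices(total_prbs: int, slice_weights: Dict[str, float]) -> Dict[str, int]:
    """Partition the cell's physical resource blocks between isolated slices."""
    weight_sum = sum(slice_weights.values())
    return {s: int(total_prbs * w / weight_sum) for s, w in slice_weights.items()}

def share_within_slice(slice_prbs: int, users: List[str]) -> Dict[str, int]:
    """Fairly share a slice's resource blocks among its users (spreading the remainder)."""
    base, extra = divmod(slice_prbs, max(len(users), 1))
    return {u: base + (1 if i < extra else 0) for i, u in enumerate(users)}

# Example: 100 PRBs split between an eMBB and an IoT slice, then per-user shares.
slices = allocate_slices(100, {"embb": 0.7, "iot": 0.3})
print(slices)                                        # {'embb': 70, 'iot': 30}
print(share_within_slice(slices["embb"], ["u1", "u2", "u3"]))
```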

    Human face detection techniques: A comprehensive review and future research directions

    Face detection, which is an effortless task for humans, is complex to perform on machines. The recent proliferation of computational resources is paving the way for rapid advancement of face detection technology. Many astutely designed algorithms have been proposed to detect faces. However, little heed has been paid to making a comprehensive survey of the available algorithms. This paper aims to provide a fourfold discussion of face detection algorithms. First, we explore a wide variety of available face detection algorithms in five steps: history, working procedure, advantages, limitations, and use in fields other than face detection. Second, we include a comparative evaluation of the different algorithms within each method. Third, we provide detailed comparisons among the epitomized algorithms to give an all-inclusive outlook. Lastly, we conclude this study with several promising research directions to pursue. Earlier survey papers on face detection algorithms are limited to technical details and popularly used algorithms. In our study, however, we cover detailed technical explanations of face detection algorithms and various recent sub-branches of neural networks. We present detailed comparisons among the algorithms, both overall and within sub-branches. We provide the strengths and limitations of these algorithms and a novel literature survey covering their use beyond face detection.

    Find My Trustworthy Fogs: A Fuzzy-based Trust Evaluation Framework

    The growth of IoT is evidenced by the massive amount of data generated in 2015, and even more is expected in the years to come. Relying on the cloud to meet the expanding volume, variety, and velocity of the data that IoT generates may not be feasible. In the last two years, fog computing has become a considerably important research topic in an attempt to reduce the burden on the cloud and to address the cloud's inability to meet the IoT latency requirement. However, the fog environment differs from the cloud, since it is far more distributed. Due to the dynamic nature of fog, backups such as redundant power supplies may be deemed unnecessary, and relying on just one Internet Service Provider for a fog device may be considered sufficient. If obstacles arise in this fog environment, factors such as latency, availability, or reliability would in turn be unstable. Fogs become harder to trust, and this issue is more complicated and challenging than in the conventional cloud. This implies that trustworthiness in fog is an imperative issue that needs to be addressed. With the help of a broker, managing trust in a distributed environment can be tackled. Acting as an intermediary, a broker helps facilitate negotiation between two parties. Although the brokering concept has been around for a long time and is widely used in the cloud, it is a new concept in fog computing. Of late, several research studies have incorporated brokers in fog, but these brokers focus on pricing management. To the best of our knowledge, however, there is no literature on broker-based trust evaluation in fog service allocation. This is the first work to propose a broker-based trust evaluation framework that focuses on identifying a trustworthy fog to fulfill user requests. In this paper, fuzzy logic is used as the basis for the evaluation, considering the availability and cost of the fog. We propose a Request Matching algorithm to identify a user request, and a Fuzzy-based Filtering algorithm to match the request with one of the predefined sets created and managed by the broker. We also present a use case that illustrates how fuzzy logic works in determining the trustworthiness of a fog. Our findings suggest that the algorithms can successfully provide users with a trustworthy fog that matches their requirements.
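    The following Python sketch gives a loose illustration of how a fuzzy-style trust score could combine availability and cost. The triangular membership functions and the single rule used here are assumptions for illustration; they are not the paper's actual rule base.

```python
# A minimal, illustrative fuzzy-style trust score for a fog node based on its
# availability and cost. Membership functions, rule and score scale are
# illustrative assumptions, not the paper's fuzzy system.

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trust_score(availability: float, cost: float) -> float:
    """Combine 'high availability' and 'low cost' memberships into a trust score."""
    high_avail = tri(availability, 0.5, 1.0, 1.5)   # availability in [0, 1]
    low_cost   = tri(cost,        -0.5, 0.0, 0.6)   # normalised cost in [0, 1]
    # Simple rule: trust is high when availability is high AND cost is low.
    return round(min(high_avail, low_cost), 3)

# A fog with 90% availability and low normalised cost scores higher than a costly one.
print(trust_score(availability=0.9, cost=0.2))  # 0.667
print(trust_score(availability=0.9, cost=0.5))  # 0.167
```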

    Green Demand Aware Fog Computing: A Prediction-based Dynamic Resource Provisioning Approach

    Fog computing has emerged and can potentially drive the next paradigm shift by extending cloud services to the edge of the network, bringing resources closer to end users. With its close proximity to end users and its distributed nature, latency can be significantly reduced. With the appearance of more and more latency-stringent applications, we will witness an unprecedented amount of demand for fog computing in the near future. Undoubtedly, this will lead to a rise in the energy footprint of the network edge and access segment. To reduce energy consumption in fog computing without compromising performance, this paper proposes the Green Demand Aware Fog Computing (GDAFC) solution. Our solution uses a prediction technique to identify the working fog nodes (nodes that serve when requests arrive), standby fog nodes (nodes that take over when the computational capacity of the working fog nodes is no longer sufficient), and idle fog nodes in a fog computing infrastructure. Additionally, it assigns an appropriate sleep interval to the fog nodes, taking into account the delay requirements of the applications. Results obtained from the mathematical formulation show that our solution can save up to 65% energy without deteriorating delay performance.
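    The sketch below illustrates, under simplified assumptions, how a demand prediction could split fog nodes into working, standby, and idle sets and bound the sleep interval by the application delay budget. The moving-average predictor, headroom factor, and sleep-interval formula are illustrative assumptions rather than GDAFC's actual model.

```python
# A minimal, illustrative sketch of demand-aware fog node provisioning. The
# predictor, headroom factor and sleep-interval formula are assumptions for
# illustration, not GDAFC's mathematical formulation.
from typing import List, Tuple

def predict_demand(history: List[float]) -> float:
    """Naive moving-average prediction of the next interval's workload (in node-units)."""
    window = history[-3:] if len(history) >= 3 else history
    return sum(window) / len(window)

def provision(nodes: int, history: List[float],
              headroom: float = 0.2) -> Tuple[int, int, int]:
    """Split the fog into working, standby and idle (sleeping) node counts."""
    demand = predict_demand(history)
    working = min(nodes, max(1, round(demand)))
    standby = min(nodes - working, max(1, round(demand * headroom)))
    idle = nodes - working - standby
    return working, standby, idle

def sleep_interval(delay_budget_ms: float, wakeup_ms: float = 50.0) -> float:
    """Sleep no longer than the application's delay budget minus wake-up overhead."""
    return max(0.0, delay_budget_ms - wakeup_ms)

history = [4.0, 5.0, 6.0]                     # past demand in 'busy node' units
print(provision(nodes=10, history=history))   # (5, 1, 4)
print(sleep_interval(delay_budget_ms=200.0))  # 150.0 ms for idle nodes
```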

    Background Traffic Load Aware Video Class-lecture Client Admission in a Bandwidth Constrained Campus Network

    Video class-lecture streaming is regarded as a popular means of improving the quality of teaching and learning in schools and universities. Several research findings reveal that recorded lecture videos (streamed over the Internet) are a useful supplement to non-classroom learning. Despite this importance, some schools and universities are reluctant to offer a video lecture streaming service in their campus network, fearing that the streaming service would impose additional traffic load on the network. In fact, in a bandwidth constrained campus network, other regular traffic flows may experience lower throughput, packet drops, and delay due to the presence of class-lecture video streaming traffic, resulting in deteriorating Quality of Experience (QoE) for campus users. In this paper, we propose a video streaming service model for bandwidth constrained campus networks. We refer to our solution as the Class Lecture on Demand (CLD) service, which can be easily adopted on a campus. CLD defines the policies for admitting the number of clients that request the video streaming service, taking into account peak hour and off-peak hour background traffic load. The paper provides a detailed procedure showing how a network administrator in a bandwidth constrained campus network can measure the maximum number of class-lecture streaming requests that a video streaming server should accommodate at different parts of the day without affecting other traffic flows. Additionally, we provide an insightful discussion of policies to make video lecture streaming easily adoptable in a bandwidth constrained campus network.
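    As a rough illustration of background-load-aware admission, the Python sketch below computes how many streaming clients a constrained link can admit for given peak and off-peak background loads. The link capacity, per-stream bitrate, and reserve fraction are hypothetical values used only to show the calculation; they are not measurements or policies from the paper.

```python
# A minimal, illustrative sketch of background-load-aware admission control for
# class-lecture streaming clients. All numbers below are hypothetical examples.

def max_streaming_clients(link_capacity_mbps: float,
                          background_load_mbps: float,
                          stream_bitrate_mbps: float,
                          reserve_fraction: float = 0.1) -> int:
    """Admit only as many clients as the residual bandwidth can carry."""
    reserved = link_capacity_mbps * reserve_fraction        # headroom for other flows
    residual = link_capacity_mbps - background_load_mbps - reserved
    return max(0, int(residual // stream_bitrate_mbps))

# Peak hour vs off-peak hour on a hypothetical 100 Mbps campus uplink with 2 Mbps streams.
print(max_streaming_clients(100, background_load_mbps=70, stream_bitrate_mbps=2))  # 10
print(max_streaming_clients(100, background_load_mbps=30, stream_bitrate_mbps=2))  # 30
```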